AI Assistant: Return Warning when Notes Data Exceeds Token Limits


What's new?

The AI Assistant now returns a warning message when Notes data exceeds token limits. If this occurs, some or all of the data may be excluded from the request sent to the LLM. The Assistant will still return a response, but with limited context. The warning message reads:

"The request has exceeded the max token size. Historical conversation context or additional prompt data may have been truncated. The response has exceeded your LLM's token limit and has been truncated."

Why does it matter?

When a large request causes data to be truncated, this warning gives users clear visibility into why a response may have reduced context, allowing them to adjust their input accordingly.

How do I enable this?

This update will be enabled automatically for customers with AI Assistant.

This is part of the BH R2025.9 release. See the release calendar for details.